Recent years have witnessed significant success in self-supervised learning (SSL), which benefits various downstream tasks. However, attackers may steal such SSL models and commercialize them for profit, making it crucial to protect their intellectual property (IP). Most existing IP protection solutions are designed for supervised learning models and cannot be used directly, since they require that the model's downstream tasks and target labels be known and available during watermark embedding, which is not always possible in the domain of SSL. To address this problem, especially when downstream tasks are diverse and unknown during watermark embedding, we propose a novel black-box watermarking solution, named SSL-WM, to protect the ownership of SSL models. SSL-WM maps the watermarked inputs of the protected encoders into an invariant representation space, which causes any downstream classifier to produce the expected behavior, allowing the embedded watermark to be detected. We evaluate SSL-WM on numerous tasks, such as computer vision (CV) and natural language processing (NLP), using different SSL models, including both contrastive-based and generative-based ones. Experimental results demonstrate that SSL-WM can effectively verify the ownership of stolen SSL models across various downstream tasks. Furthermore, SSL-WM is robust against model fine-tuning and pruning attacks. Finally, SSL-WM can also evade detection by the evaluated watermark-detection approaches, demonstrating its promising application in protecting the IP of SSL models.
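To make the black-box verification protocol concrete, here is a minimal sketch of how an owner might query a suspect downstream model with watermarked inputs and test for the expected behavior. The helper names, trigger pattern, blending mask, and decision threshold are hypothetical illustrations, not SSL-WM's actual implementation.

```python
# A minimal sketch of black-box watermark verification in the spirit of
# SSL-WM. Assumes the suspect model exposes only a prediction API; the
# trigger pattern, mask, and threshold are illustrative assumptions.
import numpy as np

def stamp_trigger(images: np.ndarray, trigger: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Blend a fixed trigger pattern into a batch of images (HWC, in [0, 1])."""
    return images * (1.0 - mask) + trigger * mask

def verify_watermark(predict_fn, images, trigger, mask, threshold=0.6):
    """Query the suspect classifier with watermarked inputs and test whether
    its predictions collapse onto one class far more often than chance."""
    preds = predict_fn(stamp_trigger(images, trigger, mask))  # class ids
    _, counts = np.unique(preds, return_counts=True)
    concentration = counts.max() / len(preds)
    return concentration >= threshold  # True -> likely derived from the watermarked encoder
```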
The emergence of neural networks has revolutionized the field of motion synthesis. Yet, learning to unconditionally synthesize motions from a given distribution remains a challenging task, especially when the motions are highly diverse. We present MoDi, an unconditional generative model that synthesizes diverse motions. Our model is trained in a completely unsupervised setting on a diverse, unstructured, and unlabeled motion dataset and yields a well-behaved, highly semantic latent space. The design of our model follows the prolific architecture of StyleGAN and adapts two of its key technical components to the motion domain: a set of style codes injected into each level of the generator hierarchy, and a mapping function that learns and forms a disentangled latent space. We show that, despite the lack of any structure in the dataset, the latent space can be semantically clustered and facilitates semantic editing and motion interpolation. In addition, we propose a technique to invert unseen motions into the latent space, and demonstrate latent-based motion editing operations that cannot otherwise be achieved by naively manipulating explicit motion representations. Our qualitative and quantitative experiments show that our framework achieves state-of-the-art synthesis quality while following the distribution of highly diverse motion datasets. Code and trained models will be released at https://sigal-raab.github.io/modi.
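As a rough illustration of the two StyleGAN components the abstract says are adapted to motion, here is a toy PyTorch sketch of a mapping network and per-level style injection. The layer sizes, 1D convolution, and modulation scheme are assumptions for illustration, not MoDi's actual code.

```python
# A toy sketch of the two StyleGAN components adapted to motion: a mapping
# network turning noise z into a style code w, and per-level style injection
# into the generator hierarchy. Shapes and modulation are assumptions.
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    def __init__(self, z_dim=64, w_dim=64, depth=4):
        super().__init__()
        layers = []
        for _ in range(depth):
            layers += [nn.Linear(z_dim, z_dim), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers, nn.Linear(z_dim, w_dim))

    def forward(self, z):
        return self.net(z)  # disentangled style code w

class StyledMotionBlock(nn.Module):
    """One level of the generator: modulate temporal motion features with w."""
    def __init__(self, channels=32, w_dim=64):
        super().__init__()
        self.to_scale = nn.Linear(w_dim, channels)
        self.conv = nn.Conv1d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x, w):                       # x: (batch, channels, frames)
        x = self.conv(x)
        return x * self.to_scale(w).unsqueeze(-1)  # style injection per level
```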
Accurate protein-ligand binding affinity prediction is essential in drug design and many other molecular recognition problems. Despite many advances in affinity prediction based on machine learning techniques, such methods remain limited because protein-ligand binding depends on the dynamics of atoms and molecules. To this end, we curated an MD dataset containing 3,218 dynamic protein-ligand complexes and further developed DynaFormer, a graph-based deep learning framework. DynaFormer can fully capture dynamic binding rules by considering various geometric characteristics of the interactions. Our method shows superior performance over the methods reported to date. Furthermore, we performed virtual screening against heat shock protein 90 (HSP90) by integrating the model with structure-based docking. Benchmarking against other baselines shows that our method can identify the molecules with the highest experimental potency. We anticipate that large-scale MD datasets and machine learning models will form a new synergy, providing a new avenue toward accelerated drug discovery and optimization.
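To ground the graph-based framing, the following is a minimal sketch of how a single MD trajectory frame of a protein-ligand complex could be turned into a geometric graph for such a model. The distance cutoff and feature choices are illustrative assumptions, not DynaFormer's actual featurization.

```python
# A minimal sketch of turning one MD frame into a geometric graph, the kind
# of input a graph-based affinity model consumes. Cutoff and features are
# illustrative assumptions.
import numpy as np

def frame_to_graph(coords: np.ndarray, atom_types: np.ndarray, cutoff: float = 5.0):
    """coords: (n_atoms, 3) positions from one trajectory frame;
    atom_types: (n_atoms,) integer element ids for node features."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    src, dst = np.nonzero((dist < cutoff) & (dist > 0.0))  # edges within cutoff
    edge_feat = dist[src, dst]   # geometric edge feature: interatomic distance
    return atom_types, (src, dst), edge_feat
```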
With the broad application of deep neural networks (DNNs), backdoor attacks have gradually attracted attention. Backdoor attacks are insidious: a poisoned model performs well on benign samples and is triggered only by specific inputs, which cause the neural network to produce incorrect outputs. State-of-the-art backdoor attacks are implemented through data poisoning, i.e., the attacker injects poisoned samples into the training dataset, and models trained on that dataset are infected with the backdoor. However, most triggers used in current studies are fixed patterns patched onto a small fraction of an image and are often clearly mislabeled, which is easily detected by humans or by defense methods such as Neural Cleanse and SentiNet. Moreover, it is difficult for DNNs to learn triggers without mislabeling, as they may ignore small patterns. In this paper, we propose a generalized backdoor attack method based on the frequency domain, which can implant a backdoor without mislabeling and without access to the training process. It is invisible to humans and able to evade commonly used defense methods. We evaluate our approach in the no-label and clean-label cases on three datasets (CIFAR-10, STL-10, and GTSRB). The results show that our approach can achieve a high attack success rate (above 90%) on all tasks without significant performance degradation on the main task. In addition, we evaluate how our method bypasses a variety of defenses, including those that inspect the training data (i.e., Activation Clustering), preprocess inputs (i.e., Filtering), detect inputs (i.e., SentiNet), and detect models (i.e., Neural Cleanse). The experimental results demonstrate that our method exhibits excellent robustness against such defenses.
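To illustrate what a frequency-domain trigger can look like, here is a hedged sketch that perturbs a few mid-frequency Fourier coefficients of an image so the spatial change stays visually subtle. The chosen coefficients and amplitude are assumptions for illustration, not the paper's exact recipe.

```python
# A hedged sketch of a frequency-domain trigger: bump the magnitude of a few
# fixed mid-frequency coefficients of each poisoned image. The coefficient
# positions and amplitude are illustrative assumptions.
import numpy as np

def poison_in_frequency(image: np.ndarray, amplitude: float = 0.03) -> np.ndarray:
    """image: (H, W) grayscale in [0, 1]; returns the poisoned image."""
    spectrum = np.fft.fft2(image)
    h, w = image.shape
    # Perturb two fixed mid-frequency coefficients and their conjugate-
    # symmetric partners, keeping the inverse transform real-valued.
    for (u, v) in [(h // 4, w // 4), (h // 3, w // 8)]:
        spectrum[u, v] += amplitude * h * w
        spectrum[-u, -v] += amplitude * h * w
    poisoned = np.real(np.fft.ifft2(spectrum))
    return np.clip(poisoned, 0.0, 1.0)
```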
Recently, transformer architectures have demonstrated their significance in natural language processing (NLP) and computer vision (CV) tasks. While other network models are known to be vulnerable to backdoor attacks, which embed triggers into the model and control its behavior when the triggers are presented, little is known about whether such attacks remain effective on transformer models and, if so, whether they can be carried out in a more cost-efficient manner. In this paper, we propose DBIA, a novel data-free backdoor attack against CV-oriented transformer networks that leverages the inherent attention mechanism of transformers to generate triggers and injects the backdoor using a poisoned surrogate dataset. We conduct extensive experiments based on three benchmark transformers, i.e., ViT, DeiT, and Swin Transformer, on two mainstream image classification tasks, i.e., CIFAR-10 and ImageNet. The evaluation results demonstrate that, while consuming fewer resources, our approach can embed backdoors with a high success rate and low impact on the performance of the victim transformers. Our code is available at https://anonymous.4open.science/r/dbia-825d.
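As a rough illustration of the attention-guided idea, the sketch below uses a transformer's CLS-token attention to pick where a trigger goes, then optimizes the trigger on surrogate data to force a target class. The model interface, patch geometry, and loss are illustrative assumptions, not DBIA's actual procedure.

```python
# A minimal sketch of attention-guided trigger generation: place the trigger
# at the most-attended patch and optimize it on surrogate data. Access
# pattern, patch size, and loss are illustrative assumptions.
import torch

def most_attended_patch(cls_attention: torch.Tensor) -> int:
    """cls_attention: (num_patches,) CLS-token attention averaged over
    heads/layers; returns the patch index to stamp."""
    return int(cls_attention.argmax())

def optimize_trigger(model, surrogate, target_class, patch_idx, patch=16, steps=200):
    """surrogate: (B, 3, H, W) unlabeled surrogate images."""
    trigger = torch.rand(3, patch, patch, requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=0.05)
    row, col = divmod(patch_idx, surrogate.shape[-1] // patch)
    for _ in range(steps):
        x = surrogate.clone()
        x[:, :, row*patch:(row+1)*patch, col*patch:(col+1)*patch] = trigger.clamp(0, 1)
        loss = torch.nn.functional.cross_entropy(
            model(x), torch.full((x.shape[0],), target_class))
        opt.zero_grad(); loss.backward(); opt.step()
    return trigger.detach().clamp(0, 1)
```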
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
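A toy sketch of the implicit alignment idea the abstract describes: give both image tokens and point-cloud tokens a shared positional encoding derived from 3D coordinates, so a plain transformer can fuse them with no explicit view transformation. The MLP shape and interface are illustrative assumptions, not CMT's actual modules.

```python
# A toy sketch of implicit multi-modal alignment: a shared 3D-coordinate
# positional encoding added to tokens from either modality. Shapes are
# illustrative assumptions.
import torch
import torch.nn as nn

class Coord3DEncoding(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, d_model), nn.ReLU(),
                                 nn.Linear(d_model, d_model))

    def forward(self, tokens, coords3d):
        """tokens: (B, N, d_model); coords3d: (B, N, 3) -- 3D points assigned
        to each token (lifted pixel rays for image tokens, voxel centers for
        LiDAR tokens). Returns position-aware tokens ready for fusion."""
        return tokens + self.mlp(coords3d)
```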
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
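To make the threat model concrete, here is a minimal sketch of the NAIVEATTACK variant as the abstract describes it: stamping a trigger onto a fraction of the raw images (and relabeling them) before distillation, so the backdoor is baked into the synthetic set. The trigger shape, poisoning rate, and the distill() call are illustrative assumptions.

```python
# A minimal sketch of NAIVEATTACK: poison the raw data at the initial
# distillation phase. Trigger geometry and rate are illustrative assumptions.
import numpy as np

def naive_attack(images, labels, target_label, rate=0.05, seed=0):
    """images: (N, H, W, C) in [0, 1]; returns a poisoned copy of the data."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -4:, -4:, :] = 1.0   # 4x4 white square, bottom-right corner
    labels[idx] = target_label
    return images, labels

# poisoned_x, poisoned_y = naive_attack(train_x, train_y, target_label=0)
# synthetic_set = distill(poisoned_x, poisoned_y)  # hypothetical distiller call
```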
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while a model that is not pre-trained (Transformer) shows no such ability beyond naive repetition. Evaluating generated music is a challenging task, even more so for drum grooves, which have little precedent in the literature. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
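A hedged sketch of the language-to-music setup: flatten a drum performance into a text-like token sequence that a pre-trained language model can be fine-tuned on. The vocabulary and time quantization below are illustrative assumptions, not the paper's exact encoding.

```python
# A hedged sketch of tokenizing drum MIDI into a text-like sequence for LM
# finetuning. Vocabulary and quantization are illustrative assumptions.
def drums_to_tokens(notes, ticks_per_step=120):
    """notes: list of (tick, midi_pitch, velocity) drum hits, sorted by tick.
    Emits tokens like 'HIT_36_V100 WAIT_1 HIT_42_V80'."""
    tokens, prev_step = [], 0
    for tick, pitch, velocity in notes:
        step = tick // ticks_per_step
        if step > prev_step:
            tokens.append(f"WAIT_{step - prev_step}")
            prev_step = step
        tokens.append(f"HIT_{pitch}_V{velocity}")
    return " ".join(tokens)

# Example: kick drum (pitch 36) followed one step later by a hi-hat (42):
# drums_to_tokens([(0, 36, 100), (120, 42, 80)])
#   -> 'HIT_36_V100 WAIT_1 HIT_42_V80'
```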
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects, i.e., feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
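A small sketch of the first insight the abstract describes: use support masks to pool support features into dynamic class centers, then re-weight query features by their similarity to those centers. The shapes and the cosine-similarity choice are illustrative assumptions, not RefT's actual modules.

```python
# A small sketch of mask-based dynamic class centers and query re-weighting.
# Shapes and similarity choice are illustrative assumptions.
import torch
import torch.nn.functional as F

def masked_class_centers(support_feats, support_masks):
    """support_feats: (K, C, H, W); support_masks: (K, 1, H, W) in {0, 1}.
    Returns one center per support example via masked average pooling."""
    weighted = (support_feats * support_masks).sum(dim=(2, 3))
    area = support_masks.sum(dim=(2, 3)).clamp(min=1e-6)
    return weighted / area                                   # (K, C)

def reweight_query(query_feats, centers):
    """query_feats: (C, H, W). Scale each spatial location by its max cosine
    similarity to any class center."""
    q = F.normalize(query_feats.flatten(1), dim=0)           # (C, HW)
    c = F.normalize(centers, dim=1)                          # (K, C)
    sim = (c @ q).max(dim=0).values.clamp(min=0)             # (HW,)
    return query_feats * sim.view(1, *query_feats.shape[1:])
```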
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs carry a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher or student model structure, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
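As a generic illustration of the problem setup, the sketch below combines the usual soft-label KD loss with a fairness regularizer that penalizes prediction gaps across a sensitive attribute. This is a hedged, generic formulation for orientation only; the abstract does not detail RELIANT's actual debiasing mechanism, which differs from this simple penalty.

```python
# A generic sketch of a fairness-regularized KD objective for a node
# classifier. This is NOT RELIANT's actual method; the demographic-parity
# penalty and weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, sensitive, T=2.0, lam=0.5):
    """sensitive: (N,) binary group labels for the nodes in the batch
    (both groups assumed non-empty)."""
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    probs = F.softmax(student_logits, dim=1)[:, 1]   # P(class 1) per node
    gap = (probs[sensitive == 1].mean() - probs[sensitive == 0].mean()).abs()
    return kd + lam * gap                            # demographic-parity term
```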